As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
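The "multiple stages of LM-based generation and filtering" idea can be sketched as a minimal generate-then-filter loop. The functions `lm_generate` and `pm_score` below are hypothetical stand-ins (hard-coded stubs) for a language model and a preference model; the real pipeline's prompts, scoring, and thresholds are not specified here.

```python
# Minimal sketch of an LM-based "generate then filter" evaluation pipeline.
# `lm_generate` and `pm_score` are hypothetical stubs, not a real LM/PM API.

def lm_generate(instruction, n):
    # Stand-in for sampling n candidate yes/no questions from an LM.
    return [f"Is statement {i} consistent with the stated persona? (Yes/No)"
            for i in range(n)]

def pm_score(example):
    # Stand-in for a preference-model relevance score in [0, 1].
    return 0.9 if "persona" in example else 0.2

def build_eval_dataset(instruction, n_candidates=10, threshold=0.5):
    candidates = lm_generate(instruction, n_candidates)
    # Keep only candidates the preference model rates as relevant.
    return [c for c in candidates if pm_score(c) >= threshold]

dataset = build_eval_dataset("Write yes/no questions testing sycophancy.")
print(len(dataset))  # all 10 stub candidates pass the stub filter
```

In the actual workflow each stage would call a model; the stubs only show how generation and filtering compose.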
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Reinforcement learning in partially observable domains is challenging due to the lack of observable state information. Thankfully, learning offline in a simulator with such state information is often possible. In particular, we propose a method for partially observable reinforcement learning that uses a fully observable policy (which we call a state expert) during offline training to improve online performance. Based on Soft Actor-Critic (SAC), our agent balances performing actions similar to the state expert and getting high returns under partial observability. Our approach can leverage the fully-observable policy for exploration and parts of the domain that are fully observable while still being able to learn under partial observability. On six robotics domains, our method outperforms pure imitation, pure reinforcement learning, the sequential or parallel combination of both types, and a recent state-of-the-art method in the same setting. A successful policy transfer to a physical robot in a manipulation task from pixels shows our approach's practicality in learning interesting policies under partial observability.
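One way to read "balances performing actions similar to the state expert and getting high returns" is an actor objective that mixes the standard SAC term with a behavioral-cloning term toward the expert's action. The weighting `lam` and the concrete function shapes below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mixed_actor_loss(q_value, log_prob, agent_action, expert_action,
                     alpha=0.2, lam=0.5):
    """Illustrative actor loss: SAC term plus a behavioral-cloning term.

    sac_term : alpha * log_prob - q_value (standard SAC actor objective)
    bc_term  : squared distance to the state expert's action
    lam      : assumed trade-off between return-seeking and imitation
    """
    sac_term = alpha * log_prob - q_value
    bc_term = np.sum((np.asarray(agent_action) - np.asarray(expert_action)) ** 2)
    return (1 - lam) * sac_term + lam * bc_term

# Matching the expert exactly removes the imitation penalty.
loss_match = mixed_actor_loss(q_value=1.0, log_prob=-1.0,
                              agent_action=[0.3, -0.1],
                              expert_action=[0.3, -0.1])
loss_far = mixed_actor_loss(q_value=1.0, log_prob=-1.0,
                            agent_action=[1.0, 1.0],
                            expert_action=[0.3, -0.1])
print(loss_match < loss_far)  # True
```

Annealing `lam` toward zero over training would recover pure SAC once the expert's guidance is no longer needed; that schedule is likewise an assumption.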
3D object detection is a key component of autonomous driving, and deep neural networks (DNNs) have achieved state-of-the-art performance on this task. However, deep models are notorious for assigning high confidence scores to out-of-distribution (OOD) inputs, i.e., inputs not drawn from the training distribution. Detecting OOD inputs is challenging and is essential for the safe deployment of models. OOD detection has been studied extensively for classification tasks, but it has received little attention for object detection, particularly LiDAR-based 3D object detection. In this paper, we focus on detecting OOD inputs for LiDAR-based 3D object detection. We formulate what OOD inputs mean for object detection and propose adapting several OOD detection methods to the object detection task, which we achieve through our proposed feature-extraction method. To evaluate OOD detection methods, we develop a simple but effective technique for generating OOD objects for a given object detection model. Our evaluation on the KITTI dataset shows that different OOD detection methods are biased toward detecting specific OOD objects. This underscores the importance of combined OOD detection methods and the need for further research in this direction.
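One classification-style OOD score that is commonly adapted to feature spaces is the Mahalanobis distance of a sample's features to the in-distribution statistics. The 2-D toy features below are an illustrative assumption, not LiDAR detection features, and this is only one of the several methods the abstract alludes to.

```python
import numpy as np

# Sketch: Mahalanobis-distance OOD score over a feature space.
rng = np.random.default_rng(0)
train_feats = rng.normal(0.0, 1.0, size=(500, 2))   # in-distribution features
mu = train_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_feats, rowvar=False))

def mahalanobis_ood_score(feat):
    d = feat - mu
    return float(d @ cov_inv @ d)  # larger = more OOD

in_dist = mahalanobis_ood_score(np.array([0.1, -0.2]))
ood = mahalanobis_ood_score(np.array([8.0, 8.0]))
print(in_dist < ood)  # True
```

A detector-specific variant would compute `train_feats` from the features extracted for each predicted box rather than raw samples.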
As an important tool for cyber defense, deception is evolving rapidly and complements existing perimeter security measures by quickly detecting breaches and data theft. One factor limiting the use of deception is the cost of hand-crafting realistic artifacts. However, recent advances in machine learning create opportunities for scalable, automated generation of realistic deceptions. This vision paper describes the opportunities and challenges involved in developing models that mimic many common elements of the IT stack for deception purposes.
Purpose: To substantially shorten the acquisition time required for quantitative 3D chemical exchange saturation transfer (CEST) and semisolid magnetization transfer (MT) imaging and to enable fast reconstruction of chemical exchange parameter maps. Methods: 3D CEST and MT magnetic resonance fingerprinting (MRF) datasets of L-arginine phantoms, whole brains, and calf muscles of healthy volunteers, cancer patients, and cardiac patients were acquired using 3T clinical scanners at three different sites, using three different scanner models and coils. A generative-adversarial-network supervised framework (GAN-CEST) was then designed and trained to learn the mapping from a reduced input data space to the quantitative exchange parameter space, while preserving perceptual and quantitative content. Results: The GAN-CEST 3D acquisition time was 42-52 seconds, 70% shorter than CEST-MRF. Quantitative reconstruction of the entire brain took 0.8 seconds. Excellent agreement was observed between ground-truth and GAN-based L-arginine concentration and pH values (Pearson's r > 0.97, NRMSE < 1.5%). GAN-CEST images from brain-tumor subjects yielded semisolid volume fraction and exchange rate NRMSE of 3.8 ± 1.3% and 4.6 ± 1.3%, and SSIM of 96.3 ± 1.6% and 95.0 ± 2.4%, respectively. Mapping of the calf-muscle semisolid exchange parameters yielded NRMSE < 7% and SSIM > 94%. In regions with large susceptibility artifacts, GAN-CEST exhibited improved performance and noise reduction compared to MRF. Conclusion: GAN-CEST can substantially reduce the acquisition time for quantitative semisolid MT/CEST mapping while maintaining performance, even for pathologies and scanner models unseen during training.
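The agreement figures above are reported as NRMSE. A common definition (assumed here; the paper's exact normalization is not given in the abstract) divides the root-mean-square error by the ground truth's dynamic range:

```python
import numpy as np

def nrmse(pred, gt):
    """RMSE normalized by the ground-truth dynamic range (assumed convention)."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return rmse / (gt.max() - gt.min())

gt = np.array([0.0, 1.0, 2.0, 3.0])
pred = gt + 0.1                       # uniform +0.1 bias
print(round(nrmse(pred, gt), 4))      # rmse = 0.1, range = 3 -> 0.0333
```

Other normalizations (by the mean or the norm of `gt`) are also used in the MRI literature, so reported NRMSE values are only comparable under the same convention.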
In this paper, we study a new latency optimization problem for blockchain-based federated learning (BFL) in multi-server edge computing. In this system model, distributed mobile devices (MDs) communicate with a set of edge servers (ESs) to simultaneously handle machine learning (ML) model training and block mining. To assist resource-constrained MDs with ML model training, we develop an offloading strategy that enables MDs to transfer their data to one of the associated ESs. We then propose a new decentralized ML model aggregation solution at the edge layer, based on a consensus mechanism, to build a global ML model via peer-to-peer (P2P) blockchain communication. The blockchain builds trust between MDs and ESs to facilitate reliable ML model sharing and cooperative consensus formation, and enables rapid elimination of manipulated models caused by poisoning attacks. We formulate latency-aware BFL as an optimization problem that aims to minimize system latency by jointly considering data offloading decisions, MDs' transmit power, bandwidth allocation for MDs' data offloading, MDs' computation allocation, and hash power allocation. Given the mixed action space of discrete offloading and continuous allocation variables, we propose a novel deep reinforcement learning scheme with a parameterized advantage actor-critic algorithm. We theoretically characterize the convergence properties of BFL in terms of aggregation latency, mini-batch size, and the number of P2P communication rounds. Our numerical evaluation demonstrates the superiority of our proposed scheme over baselines in terms of model training efficiency, convergence rate, system latency, and robustness against model poisoning attacks.
Here, we demonstrate how machine learning enables the prediction of comonomer reactivity ratios based on the molecular structure of the monomers. We combine multi-task learning, multiple inputs, and a Graph Attention Network to build a model capable of predicting reactivity ratios from the monomers' chemical structures.
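For context on how predicted reactivity ratios are used downstream, they plug directly into the classical Mayo-Lewis (copolymer) equation, which gives the instantaneous copolymer composition F1 from the feed composition f1; the ratio values in the example are illustrative, not model outputs.

```python
def mayo_lewis_F1(r1, r2, f1):
    """Instantaneous copolymer composition F1 from the Mayo-Lewis equation:
    F1 = (r1*f1^2 + f1*f2) / (r1*f1^2 + 2*f1*f2 + r2*f2^2), with f2 = 1 - f1.
    """
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2
    return num / den

# Ideal copolymerization (r1 = r2 = 1) reproduces the feed composition.
print(round(mayo_lewis_F1(1.0, 1.0, 0.3), 6))  # 0.3
```

With unequal ratios (e.g. r1 > 1 > r2) the equation shows the enrichment of the copolymer in monomer 1, which is why accurate ratio prediction matters.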
Modern deep neural networks have achieved superhuman performance in tasks from image classification to game play. Surprisingly, these various complex systems with massive numbers of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon is known as "Neural Collapse," and it was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions to the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove that Neural Collapse occurs in deep linear networks for the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our analysis to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse in this setting.
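Part of the Neural Collapse structure is that the K class-mean directions form a simplex equiangular tight frame (ETF): unit vectors whose pairwise cosines all equal -1/(K-1). The standard construction of such a frame can be checked numerically:

```python
import numpy as np

# Columns of M form a K-point simplex ETF (standard construction).
K = 4
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
G = M.T @ M  # Gram matrix of the K class-mean directions

print(np.allclose(np.diag(G), 1.0))            # unit norms -> True
off_diag = G[~np.eye(K, dtype=bool)]
print(np.allclose(off_diag, -1.0 / (K - 1)))   # equal pairwise angles -> True
```

The check works because (I - J/K) is idempotent, so the Gram matrix is exactly the scaled projector, giving 1 on the diagonal and -1/(K-1) off it.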
Machine reading comprehension has become one of the most advanced and popular research topics in natural language processing in recent years. Classifying question answerability is a significant sub-task in machine reading comprehension, yet it has received relatively little study. Retro-Reader is one of the studies that addresses this problem effectively. However, the encoders of most traditional machine reading comprehension models in general, and Retro-Reader in particular, have not fully exploited the contextual semantic information of the passage. Inspired by SemBERT, we use semantic role labels from the SRL task to add semantics to pre-trained language models such as mBERT, XLM-R, and PhoBERT. This experiment was conducted to compare the influence of semantics on answerability classification for Vietnamese machine reading comprehension. We also hope this experiment will enhance the encoder of the Retro-Reader model's Sketchy Reading Module. The improved Retro-Reader encoder with semantics was applied to the Vietnamese machine reading comprehension task for the first time and obtained positive results.
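A SemBERT-style fusion of the kind described can be sketched as concatenating each token's contextual embedding with an embedding of its semantic role label. The dimensions, the tiny label vocabulary, and the random embeddings below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_tok, d_srl = 6, 8, 4
token_emb = rng.normal(size=(seq_len, d_tok))         # e.g. from mBERT/XLM-R/PhoBERT
srl_vocab = {"O": 0, "ARG0": 1, "ARG1": 2, "V": 3}    # toy SRL label set
srl_table = rng.normal(size=(len(srl_vocab), d_srl))  # learned SRL embeddings

srl_tags = ["ARG0", "V", "ARG1", "O", "O", "O"]       # one label per token
srl_emb = srl_table[[srl_vocab[t] for t in srl_tags]]

# Fused representation: contextual embedding || SRL-label embedding per token.
fused = np.concatenate([token_emb, srl_emb], axis=-1)
print(fused.shape)  # (6, 12)
```

The fused vectors would then feed the answerability classifier in place of the plain token embeddings; SemBERT itself applies an additional projection over the concatenation, which is omitted here for brevity.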